
    Auto-Calibration and Three-Dimensional Reconstruction for Zooming Cameras

    This dissertation proposes new algorithms to recover the calibration parameters and 3D structure of a scene, using 2D images taken by uncalibrated stationary zooming cameras. This is a common configuration, usually encountered in surveillance camera networks, stereo camera systems, and event-monitoring vision systems. This problem is known as camera auto-calibration (also called self-calibration), and the motivation behind this work is to obtain the Euclidean three-dimensional reconstruction and metric measurements of the scene using only the captured images. Under this configuration, the problem of auto-calibrating zooming cameras differs from the classical auto-calibration problem of a moving camera in two major aspects. First, the camera intrinsic parameters change due to zooming. Second, because the cameras are stationary in our case, classical motion constraints, such as a pure translation, cannot be used. In order to simplify the non-linear complexity of this problem, i.e., the auto-calibration of zooming cameras, we have followed a geometric stratification approach. In particular, we have taken advantage of the movement of the camera center, which results from the zooming process, to locate the plane at infinity and, consequently, to obtain an affine reconstruction. Then, using the assumption that typical cameras have rectangular or square pixels, the calculation of the camera intrinsic parameters becomes possible, leading to the recovery of the Euclidean 3D structure. Being linear, the proposed algorithms were easily extended to the case of an arbitrary number of images and cameras. Furthermore, we have devised a sufficient constraint for detecting parallel planes in the scene, useful information for solving other computer vision problems.
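    The final metric-upgrade step described above can be illustrated with a minimal sketch: once the image of the absolute conic ω = (K Kᵀ)⁻¹ has been estimated for a view, the upper-triangular calibration matrix K follows from a Cholesky factorisation. The intrinsics below (`K_true`, with square pixels and zero skew, matching the abstract's assumption) are hypothetical values used only to simulate ω; they are not taken from the dissertation.

    ```python
    import numpy as np

    # Hypothetical square-pixel intrinsics (fx == fy, zero skew),
    # used here only to simulate a known absolute-conic image.
    K_true = np.array([[1200.0,    0.0, 320.0],
                       [   0.0, 1200.0, 240.0],
                       [   0.0,    0.0,   1.0]])

    omega_star = K_true @ K_true.T      # dual image of the absolute conic: K K^T
    omega = np.linalg.inv(omega_star)   # image of the absolute conic: (K K^T)^-1

    # omega = K^{-T} K^{-1}; Cholesky gives the lower-triangular factor
    # L = K^{-T}, so K is recovered as (L^{-1})^T, up to projective scale.
    L = np.linalg.cholesky(omega)
    K_rec = np.linalg.inv(L).T
    K_rec /= K_rec[2, 2]                # normalise so K[2,2] = 1

    print(K_rec)                        # should match K_true
    ```

    In practice ω is only estimated (from linear constraints such as the square-pixel assumption), so the recovered K carries the estimation error of ω; the Cholesky step itself is exact.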